
    Permutohedral Attention Module for Efficient Non-Local Neural Networks

    Medical image processing tasks such as segmentation often require capturing non-local information. As organs, bones, and tissues share common characteristics such as intensity, shape, and texture, contextual information plays a critical role in labeling them correctly. Segmentation and labeling are now typically done with convolutional neural networks (CNNs), but the context available to a CNN is limited by its receptive field, which is itself constrained by memory requirements and other architectural properties. In this paper, we propose a new attention module, which we call the Permutohedral Attention Module (PAM), to efficiently capture non-local characteristics of the image. The proposed method is both memory- and computationally efficient. We provide a GPU implementation of this module suitable for 3D medical imaging problems. We demonstrate the efficiency and scalability of our module on the challenging task of vertebrae segmentation and labeling, where context plays a crucial role because of the very similar appearance of different vertebrae.

    Comment: Accepted at MICCAI-2019
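
    At its core, this line of work replaces the quadratic-cost pairwise comparison of standard non-local attention with approximate Gaussian filtering on a permutohedral lattice. As a loose, hypothetical illustration (names and shapes are assumptions, not the paper's code), the brute-force operation that such a lattice approximates in roughly linear time can be sketched as:

```python
import torch

def dense_gaussian_attention(values, descriptors):
    """Quadratic-cost Gaussian-kernel attention; a permutohedral
    lattice approximates this filtering in roughly O(N) time."""
    # values: (B, C, N) voxel features; descriptors: (B, D, N),
    # e.g. spatial position concatenated with intensity features
    d = descriptors.transpose(1, 2)                 # (B, N, D)
    dist2 = torch.cdist(d, d).pow(2)                # (B, N, N) pairwise squared distances
    w = torch.exp(-0.5 * dist2)                     # Gaussian affinities
    w = w / w.sum(dim=-1, keepdim=True)             # row-normalise into attention weights
    return torch.einsum('bnm,bcm->bcn', w, values)  # (B, C, N) filtered features
```

    For a 3D volume, N is the number of voxels, so the explicit (N, N) affinity matrix above is infeasible in memory and time; this is exactly the cost that a lattice-based approximation avoids.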

    Diffusion MRI of the facial-vestibulocochlear nerve complex: a prospective clinical validation study

    Objectives: Surgical planning of vestibular schwannoma surgery would benefit greatly from a robust method of delineating the facial-vestibulocochlear nerve complex with respect to the tumour. This study aimed to optimise a multi-shell readout-segmented diffusion-weighted imaging (rs-DWI) protocol and develop a novel post-processing pipeline to delineate the facial-vestibulocochlear complex within the skull base region, evaluating its accuracy intraoperatively using neuronavigation and tracked electrophysiological recordings.

    Methods: In a prospective study of five healthy volunteers and five patients who underwent vestibular schwannoma surgery, rs-DWI was performed and colour tissue maps (CTM) and probabilistic tractography of the cranial nerves were generated. In patients, the average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD-95) were calculated with reference to the neuroradiologist-approved facial nerve segmentation. The accuracy of patient results was assessed intraoperatively using neuronavigation and tracked electrophysiological recordings.

    Results: Using CTM alone, the facial-vestibulocochlear complex of healthy volunteer subjects was visualised on 9/10 sides. CTM were generated in all 5 patients with vestibular schwannoma, enabling the facial nerve to be accurately identified preoperatively. The mean ASSD between the two annotators' segmentations was 1.11 mm (SD 0.40) and the mean HD-95 was 4.62 mm (SD 1.78). The median distance from the nerve segmentation to a positive stimulation point was 1.21 mm (IQR 0.81–3.27 mm) and 2.03 mm (IQR 0.99–3.84 mm) for the two annotators, respectively.

    Conclusions: rs-DWI may be used to acquire dMRI data of the cranial nerves within the posterior fossa.

    Clinical relevance statement: Readout-segmented diffusion-weighted imaging and colour tissue mapping provide 1–2 mm spatially accurate imaging of the facial-vestibulocochlear nerve complex, enabling accurate preoperative localisation of the facial nerve. This study evaluated the technique in 5 healthy volunteers and 5 patients with vestibular schwannoma.

    Key Points:
    • Readout-segmented diffusion-weighted imaging (rs-DWI) with colour tissue mapping (CTM) visualised the facial-vestibulocochlear nerve complex on 9/10 sides in 5 healthy volunteer subjects.
    • Using rs-DWI and CTM, the facial nerve was visualised in all 5 patients with vestibular schwannoma and within 1.21–2.03 mm of the nerve's true intraoperative location.
    • Reproducible results were obtained on different scanners.
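
    For context on the reported metrics, ASSD and HD-95 are both derived from the distances between the surfaces of two segmentations. A minimal sketch of how they are commonly computed (helper names are hypothetical; assumes SciPy and binary 3D masks):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(a, b, spacing):
    # surface voxels = mask minus its erosion
    surf_a = a ^ binary_erosion(a)
    surf_b = b ^ binary_erosion(b)
    # distance from every voxel to the nearest surface voxel of the other
    # mask, converted to millimetres via the voxel spacing
    dt_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dt_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return dt_to_b[surf_a], dt_to_a[surf_b]

def assd_and_hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    d_ab, d_ba = surface_distances(a.astype(bool), b.astype(bool), spacing)
    assd = np.concatenate([d_ab, d_ba]).mean()                    # average symmetric surface distance
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))  # 95% Hausdorff distance
    return assd, hd95
```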

    CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation

    Domain Adaptation (DA) has recently raised strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance in patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 images (N=105) and unpaired non-annotated hrT2 images (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 images provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice score: VS 92.5%, cochleas 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.

    Comment: Submitted to Medical Image Analysis
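
    The two-stage recipe shared by the top-performing teams (translate, then segment) can be sketched as follows. This is a hypothetical skeleton, not any team's code: the single convolutions merely stand in for a CycleGAN-style translator and a U-Net-style segmenter.

```python
import torch
import torch.nn as nn

translator = nn.Conv3d(1, 1, kernel_size=3, padding=1)  # stand-in: ceT1 -> pseudo-hrT2 translation
segmenter = nn.Conv3d(1, 3, kernel_size=3, padding=1)   # stand-in: background / VS / cochlea logits
optimiser = torch.optim.Adam(segmenter.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(cet1, labels):
    # Stage 1 (assumed already trained): map annotated ceT1 images into the
    # target hrT2 appearance; the label map is unchanged by the translation.
    with torch.no_grad():
        pseudo_hrt2 = translator(cet1)
    # Stage 2: supervise the segmenter on pseudo-hrT2 with the source labels.
    logits = segmenter(pseudo_hrt2)
    loss = loss_fn(logits, labels)
    optimiser.zero_grad(); loss.backward(); optimiser.step()
    return loss.item()

# e.g. train_step(torch.randn(1, 1, 16, 64, 64), torch.zeros(1, 16, 64, 64, dtype=torch.long))
```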

    Learning joint lesion and tissue segmentation from task-specific hetero-modal datasets

    Brain tissue segmentation from multimodal MRI is a key building block of many neuroscience analysis pipelines. It could also play an important role in many clinical imaging scenarios. Established tissue segmentation approaches have, however, not been developed to cope with large anatomical changes resulting from pathology. The effect of the presence of brain lesions on their performance is thus currently uncontrolled and practically unpredictable. In contrast, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly and is achieving performance levels that make it of interest for clinical use. However, few existing approaches allow for jointly segmenting normal tissue and brain lesions. Developing a DNN for such a joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on a task-specific hetero-modal imaging protocol. In this work, we propose a novel approach to build a joint tissue and lesion segmentation model from task-specific hetero-modal and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper bound of the risk to deal with missing imaging modalities. For each task, our approach reaches performance comparable to task-specific and fully supervised models.

    Comment: Accepted as an oral presentation at MIDL 2019 [arXiv:1907.08612]
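
    One common mechanism for tolerating missing imaging modalities is to fuse only the modalities actually acquired, as in HeMIS-style mean fusion. The sketch below is an illustrative stand-in for that general idea, not the paper's exact construction (names and shapes are assumptions):

```python
import torch

def fuse_available(features, available):
    """Average per-modality feature maps over the acquired modalities only.

    features:  (B, M, C, D, H, W) one feature map per modality
    available: (B, M) boolean mask marking which modalities were acquired
    """
    w = available.float()
    while w.dim() < features.dim():
        w = w.unsqueeze(-1)                          # broadcast over C, D, H, W
    return (features * w).sum(1) / w.sum(1).clamp(min=1.0)
```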

    Inter Extreme Points Geodesics for End-to-End Weakly Supervised Image Segmentation

    We introduce InExtremIS, a weakly supervised 3D approach to train a deep image segmentation network using particularly weak train-time annotations: only 6 extreme clicks at the boundary of the objects of interest. Our fully automatic method is trained end-to-end and does not require any test-time annotations. From the extreme points, 3D bounding boxes are extracted around objects of interest. Then, deep geodesics connecting the extreme points are generated to increase the amount of "annotated" voxels within the bounding boxes. Finally, a weakly supervised regularised loss derived from a Conditional Random Field formulation is used to encourage prediction consistency over homogeneous regions. Extensive experiments are performed on a large open dataset for vestibular schwannoma segmentation. InExtremIS obtained competitive performance, approaching full supervision and significantly outperforming other weakly supervised techniques based on bounding boxes. Moreover, given a fixed annotation time budget, InExtremIS outperforms full supervision. Our code and data are available online.

    Comment: Early accept at MICCAI 2021 - code available at: https://github.com/ReubenDo/InExtremIS
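
    The CRF-derived regularised loss encourages neighbouring voxels with similar intensities to receive the same prediction. As a loose sketch of such a pairwise term (hypothetical names, and a simplified nearest-neighbour appearance kernel rather than the paper's exact formulation):

```python
import torch

def pairwise_consistency_loss(probs, image, sigma=0.1):
    # probs: (B, C, D, H, W) softmax predictions; image: (B, 1, D, H, W) intensities
    loss = 0.0
    for dim in (2, 3, 4):  # compare each voxel with its next neighbour along each axis
        p1 = probs.narrow(dim, 0, probs.size(dim) - 1)
        p2 = probs.narrow(dim, 1, probs.size(dim) - 1)
        i1 = image.narrow(dim, 0, image.size(dim) - 1)
        i2 = image.narrow(dim, 1, image.size(dim) - 1)
        w = torch.exp(-((i1 - i2) ** 2) / (2 * sigma ** 2))  # high weight for similar intensities
        loss = loss + (w * (p1 - p2).abs().sum(dim=1, keepdim=True)).mean()
    return loss
```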